
    Intelligent Multimodal Framework for Human Assistive Robotics Based on Computer Vision Algorithms

    Assistive technologies help people with disabilities improve their access to all aspects of daily life. The AIDE European project contributes to the improvement of current assistive technologies by developing and testing a modular, adaptive multimodal interface that can be customized to the individual needs of people with disabilities. This paper describes the computer vision algorithms of the multimodal interface developed within the AIDE European project. The main contribution of this computer vision part is its integration with the robotic system and with the other sensory systems (electrooculography (EOG) and electroencephalography (EEG)). The technical achievements presented herein are an algorithm for selecting objects using the gaze and, especially, a state-of-the-art algorithm for the efficient detection and pose estimation of textureless objects. These algorithms were tested under real conditions and thoroughly evaluated both qualitatively and quantitatively. The experimental results of the object selection algorithm were excellent (selection success rate above 90%) in less than 12 s. The detection and pose estimation algorithms, evaluated on the LINEMOD database, performed comparably to the state-of-the-art method while being the most computationally efficient.

    The research leading to these results received funding from the European Community's Horizon 2020 programme, AIDE project: "Adaptive Multimodal Interfaces to Assist Disabled People in Daily Activities" (grant agreement No. 645322).

    Ivorra Martínez, E.; Ortega Pérez, M.; Catalán, J. M.; Ezquerro, S.; Lledó, L. D.; Garcia-Aracil, N.; Alcañiz Raya, M. L. (2018). Intelligent Multimodal Framework for Human Assistive Robotics Based on Computer Vision Algorithms. Sensors, 18(8). https://doi.org/10.3390/s18082408
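    The abstract does not reproduce the algorithms, but gaze-based object selection is commonly built on dwell time: the gaze point must stay inside a detected object's region for a minimum duration before that object is selected. A minimal Python sketch of that idea follows; the function names, data layout, and the 1.5 s dwell threshold are illustrative assumptions, not the AIDE implementation.

    def point_in_box(pt, box):
        # box = (x, y, w, h); pt = (x, y) gaze coordinates in image pixels
        x, y, w, h = box
        return x <= pt[0] <= x + w and y <= pt[1] <= y + h

    def select_object(gaze_stream, objects, dwell_s=1.5):
        # gaze_stream yields (timestamp_s, (x, y)); objects maps id -> bbox.
        # Returns the id of the first object fixated for dwell_s seconds,
        # or None if the stream ends without a selection.
        dwell_start = {}  # object id -> time the current fixation began
        for t, pt in gaze_stream:
            for obj_id, box in objects.items():
                if point_in_box(pt, box):
                    dwell_start.setdefault(obj_id, t)
                    if t - dwell_start[obj_id] >= dwell_s:
                        return obj_id  # dwell threshold reached: select
                else:
                    dwell_start.pop(obj_id, None)  # gaze left the box: reset
        return None

    A real system would additionally smooth the gaze signal and use the pose estimation output rather than plain bounding boxes, but the dwell-time loop is the core of the selection step.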

    Hybrid brain/neural interface and autonomous vision-guided whole-arm exoskeleton control to perform activities of daily living (ADLs)

    Background: The aging of the population and the progressive increase of life expectancy in developed countries are leading to a high incidence of age-related cerebrovascular diseases, which affect people's motor and cognitive capabilities and may result in the loss of arm and hand functions. Such conditions have a detrimental impact on people's quality of life. Assistive robots have been developed to help people with motor or cognitive disabilities perform activities of daily living (ADLs) independently. Most of the robotic systems for assisting with ADLs proposed in the state of the art are external manipulators and exoskeletal devices. The main objective of this study is to compare the performance of a hybrid EEG/EOG interface for performing ADLs when the user controls an exoskeleton rather than an external manipulator.

    Methods: Ten impaired participants (5 males and 5 females, mean age 52 +/- 16 years) were instructed to use both systems to perform a drinking task and a pouring task comprising multiple subtasks. For each device, two modes of operation were studied: a synchronous mode (the user received a visual cue indicating the subtask to be performed at each time) and an asynchronous mode (the user started and finished each subtask independently). Fluent control was assumed when the time for successful initializations remained below 3 s, and reliable control when it remained below 5 s. The NASA-TLX questionnaire was used to evaluate the task workload. For the trials involving the exoskeleton, a custom Likert-scale questionnaire was used to evaluate the user's experience in terms of perceived comfort, safety, and reliability.

    Results: All participants were able to control both systems fluently and reliably. However, the results suggest better performance of the exoskeleton over the external manipulator (75% of successful initializations remained below 3 s for the exoskeleton and below 5 s for the external manipulator).

    Conclusions: Although the results of our study in terms of fluency and reliability of EEG control suggest better performance of the exoskeleton over the external manipulator, such results cannot be considered conclusive, due to the heterogeneity of the population under test and the relatively limited number of participants.

    This study was funded by the European Commission under the project AIDE (G.A. No. 645322), by the Spanish Ministry of Science and Innovation through the projects PID2019-108310RB-I00 and PLEC2022-009424, and by the Ministry of Universities and the European Union ("financed by European Union-Next Generation EU") through a Margarita Salas grant for the training of young doctors.

    Catalán, J. M.; Trigili, E.; Nann, M.; Blanco-Ivorra, A.; Lauretti, C.; Cordella, F.; Ivorra, E., et al. (2023). Hybrid brain/neural interface and autonomous vision-guided whole-arm exoskeleton control to perform activities of daily living (ADLs). Journal of NeuroEngineering and Rehabilitation, 20(1), 1-16. https://doi.org/10.1186/s12984-023-01185-w
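    The fluency and reliability criteria above reduce to simple threshold checks on initialization times. A minimal Python sketch of the scoring follows; the 3 s and 5 s thresholds come from the abstract, while the function name and data layout are assumptions for illustration.

    FLUENT_S, RELIABLE_S = 3.0, 5.0  # thresholds from the study

    def control_quality(init_times_s):
        # init_times_s: successful initialization times, in seconds
        n = len(init_times_s)
        return {
            "fluent": sum(t < FLUENT_S for t in init_times_s) / n,
            "reliable": sum(t < RELIABLE_S for t in init_times_s) / n,
        }

    # e.g. 3 of 4 initializations below 3 s, all 4 below 5 s:
    print(control_quality([2.1, 2.8, 2.9, 4.2]))
    # {'fluent': 0.75, 'reliable': 1.0}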

    Supervised and dynamic neuro-fuzzy systems to classify physiological responses in robot-assisted neurorehabilitation.

    This paper presents the application of an Adaptive Resonance Theory (ART) neural network combined with fuzzy logic systems to classify the physiological reactions of subjects performing robot-assisted rehabilitation therapies. First, the theoretical background of a neuro-fuzzy classifier called S-dFasArt is presented. Then, the methodology and experimental protocols of a robot-assisted neurorehabilitation task are described. Our results show that the combination of the dynamic S-dFasArt classifier with a supervisory module is very robust, and they suggest that this methodology could be useful for taking emotional states into account in robot-assisted environments and for enhancing and better understanding human-robot interactions.
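    The abstract does not detail S-dFasArt's equations. For orientation, a minimal plain Fuzzy ART classifier (the unsupervised core of the family that S-dFasArt extends with supervision and dynamics) can be sketched as follows. The class name and the parameters rho (vigilance), alpha (choice), and beta (learning rate) follow the standard Fuzzy ART formulation (Carpenter, Grossberg & Rosen, 1991); this is not the authors' implementation.

    import numpy as np

    class FuzzyART:
        # Minimal Fuzzy ART: inputs are vectors in [0, 1]^dim; each
        # learned category j keeps a weight vector w_j.
        def __init__(self, dim, rho=0.75, alpha=0.001, beta=1.0):
            self.rho, self.alpha, self.beta = rho, alpha, beta
            self.dim = dim
            self.w = []  # one weight vector per category

        def _code(self, x):
            x = np.asarray(x, dtype=float)
            return np.concatenate([x, 1.0 - x])  # complement coding

        def train(self, x):
            i = self._code(x)
            # rank categories by choice T_j = |i ^ w_j| / (alpha + |w_j|),
            # where ^ is the elementwise minimum and |.| the L1 norm
            order = sorted(
                range(len(self.w)),
                key=lambda j: -np.minimum(i, self.w[j]).sum()
                / (self.alpha + self.w[j].sum()),
            )
            for j in order:
                match = np.minimum(i, self.w[j]).sum() / i.sum()
                if match >= self.rho:  # vigilance test passed: resonance
                    self.w[j] = (self.beta * np.minimum(i, self.w[j])
                                 + (1 - self.beta) * self.w[j])
                    return j
            self.w.append(i)  # no category matched: commit a new one
            return len(self.w) - 1

    # Example: similar normalized feature vectors share a category,
    # while a distinct one opens a new category.
    art = FuzzyART(dim=2, rho=0.8)
    print([art.train(x) for x in ([0.10, 0.90], [0.12, 0.88], [0.90, 0.10])])
    # [0, 0, 1]

    In S-dFasArt the categories additionally carry dynamics and supervised labels, which, per the abstract, is what makes the combined classifier and supervisory module robust for physiological signals.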